Conditions for Convergence in Regularized Machine Learning Objectives

Authors

  • Patrick Hop
  • Xinghao Pan
Abstract

Analysis of the convergence rates of modern convex optimization algorithms can be achieved through two means: analysis of empirical convergence, or analysis of theoretical convergence. These two ways of capturing information diverge in efficacy when moving to the world of distributed computing, due to the introduction of non-intuitive, non-linear slowdowns associated with broadcast and, in some cases, gather operations. Despite these nuances in the rates of convergence, we can still show the existence of convergence, and lower bounds on the rates. This paper will serve as a helpful cheat-sheet for machine learning practitioners encountering this problem class in the field.
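To make the empirical side of this dichotomy concrete, the sketch below (an illustration assumed for this summary, not code from the paper) runs gradient descent on an l2-regularized least-squares objective and records the optimality gap, the quantity whose decay one would compare against a theoretical rate such as a linear or O(1/k) bound.

```python
# Hypothetical sketch: empirical convergence of gradient descent on a
# ridge-regularized least-squares objective. Names and constants are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 20, 0.1
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)

def objective(w):
    # f(w) = (1/2n) ||Xw - y||^2 + (lam/2) ||w||^2
    return 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w

def gradient(w):
    return X.T @ (X @ w - y) / n + lam * w

# Closed-form minimizer, used only to measure the optimality gap.
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
f_star = objective(w_star)

w = np.zeros(d)
step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam)  # 1/L for this objective
gaps = []
for k in range(500):
    gaps.append(objective(w) - f_star)
    w -= step * gradient(w)

# For a strongly convex objective the gap should decay geometrically;
# plotting log(gaps) against k makes the empirical rate visible.
```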

Related articles

Stability Analysis for Regularized Least Squares Regression

We discuss stability for a class of learning algorithms with respect to noisy labels. The algorithms we consider are for regression, and they involve the minimization of regularized risk functionals, such as $L(f) := \frac{1}{N}\sum_{i=1}^{N}\bigl(f(x_i) - y_i\bigr)^2 + \lambda \|f\|_H^2$. We shall call the algorithm 'stable' if, when $y_i$ is a noisy version of $f(x_i)$ for some function $f \in H$, the output of the algorithm converges to $f$ as the ...
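As a hedged illustration of the regularized risk minimization described above (an assumption of this summary, not taken from the cited paper), the minimizer over an RKHS H admits a closed form via the representer theorem:

```python
# Minimal sketch (assumed, not from the cited paper): the minimizer of
# L(f) = (1/N) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2 over an RKHS H
# with kernel k has the form f(x) = sum_i alpha_i k(x_i, x), where
# alpha solves (K + N*lam*I) alpha = y.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_rls(X, y, lam=0.1, gamma=1.0):
    N = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + N * lam * np.eye(N), y)
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha

# Usage: labels y_i are noisy versions of f(x_i); the fitted function
# should approach f as the noise shrinks, which is the stability notion above.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(50)
f_hat = fit_rls(X, y, lam=0.01)
preds = f_hat(np.linspace(-1, 1, 5)[:, None])
```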

Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation

We analyze the convergence behaviour of a recently proposed algorithm for regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is based on a new interpretation of DAL as a proximal minimization algorithm. We theoretically show under some conditions that DAL converges super-linearly in a non-asymptotic and global sense. Due to a special modelling of sparse estimation probl...
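As a rough, assumed illustration of the proximal-minimization view mentioned above (not the paper's own code), the proximal point iteration with a growing step-size schedule is the mechanism usually behind such super-linear rates:

```python
# Hypothetical illustration: the proximal point iteration
#   x_{k+1} = prox_{eta_k * f}(x_k),
# where an increasing eta_k schedule can yield super-linear convergence.
import numpy as np

def proximal_point(prox_f, x0, etas):
    """Run the proximal point iteration over a schedule of step sizes eta_k."""
    x = np.asarray(x0, dtype=float)
    for eta in etas:
        x = prox_f(x, eta)
    return x

# Example with f(x) = 0.5 * ||x||^2, whose prox is x / (1 + eta);
# a geometrically increasing eta_k drives the iterates to 0 super-linearly.
x = proximal_point(lambda x, eta: x / (1.0 + eta),
                   x0=np.ones(3),
                   etas=[2.0 ** k for k in range(10)])
```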

A Different Type of Convergence for Statistical Learning Algorithms

We discuss stability for a class of learning algorithms with respect to noisy labels. The algorithms we consider are for regression, and they involve the minimization of regularized risk functionals, such as $L(f) := \frac{1}{N}\sum_{i=1}^{N}\bigl(f(x_i) - y_i\bigr)^2 + \lambda \|f\|_H^2$. We shall call the algorithm 'stable' if, when $y_i$ is a noisy version of $f(x_i)$ for some function $f \in H$, the output of the algorithm converges to $f$ as the...

Proximal Algorithms in Statistics and Machine Learning

In this paper we develop proximal methods for statistical learning. Proximal point algorithms are useful in statistics and machine learning for obtaining optimization solutions for composite functions. Our approach exploits closed-form solutions of proximal operators and envelope representations based on the Moreau, Forward-Backward, Douglas-Rachford and Half-Quadratic envelopes. Envelope repres...
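As a small, assumed example of such closed-form proximal operators (not code from the paper), the l1 norm's proximal operator is soft-thresholding, its Moreau envelope can be evaluated directly from it, and a forward-backward step built on it handles lasso-type composite objectives:

```python
# Illustrative sketch (an assumption of this summary): the proximal operator
# of t*||.||_1 has the closed form known as soft-thresholding, and its Moreau
# envelope is the associated smoothed (Huber-like) value.
import numpy as np

def prox_l1(v, t):
    # prox_{t*||.||_1}(v) = argmin_x ||x||_1 + (1/(2t)) ||x - v||^2
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def moreau_envelope_l1(v, t):
    # M_t(v) = ||p||_1 + (1/(2t)) ||p - v||^2, with p = prox_l1(v, t)
    p = prox_l1(v, t)
    return np.abs(p).sum() + np.sum((p - v) ** 2) / (2 * t)

def forward_backward_step(x, grad_smooth, t, lam):
    # One forward-backward (proximal gradient) step for
    # min_x g(x) + lam * ||x||_1 with smooth g:
    #   x_{k+1} = prox_{t*lam*||.||_1}(x_k - t * grad g(x_k))
    return prox_l1(x - t * grad_smooth(x), t * lam)
```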

On the Rate of Convergence of Regularized Boosting Classifiers

A regularized boosting method is introduced, for which regularization is obtained through a penalization function. It is shown through oracle inequalities that this method is model adaptive. The rate of convergence of the probability of misclassification is investigated. It is shown that for quite a large class of distributions, the probability of error converges to the Bayes risk at a rate fas...

Journal:
  • CoRR

Volume abs/1305.4081  Issue -

Pages -

Publication date 2013